
    Statistical Criteria for Shape Fusion and Selection

    Surface reconstruction from point clouds often relies on a primitive extraction step, which may be followed by a merging step to correct possible over-segmentation. We present two statistical criteria to decide whether two surfaces should be considered the same and can thus be merged. They are based on the Kolmogorov-Smirnov and Mann-Whitney statistical tests for comparing distributions. Moreover, computation time can be significantly cut down using a reduced sampling based on the Dvoretzky-Kiefer-Wolfowitz inequality. The strength of our approach is that, in practice, it relies on a single intuitive parameter (homogeneous to a distance) and that it can be applied to any shape, including meshes, not just geometric primitives. It also enables the comparison of shapes of different kinds, providing a way to choose between different shape candidates. We show several applications of our method, experimenting with geometric primitive (plane and cylinder) detection, selection and fusion, both on precise laser scans and on noisy photogrammetric 3D data.
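    As a rough illustration of how such test-based merging can work, the sketch below (our own construction, not the paper's implementation) fits a joint plane to two candidate segments and merges them only when the Kolmogorov-Smirnov and Mann-Whitney tests cannot tell their residual distributions apart. The joint-fit rule, the function names, and the significance threshold are all assumptions, and the DKW-based reduced sampling is omitted.

    # Illustrative sketch (not the authors' exact criterion): decide whether two
    # point segments, each roughly planar, can be merged by comparing their
    # residual distributions with two-sample statistical tests.
    import numpy as np
    from scipy.stats import ks_2samp, mannwhitneyu

    def fit_plane(points):
        """Least-squares plane through `points`; returns (centroid, unit normal)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return centroid, vt[-1]

    def residuals(points, plane):
        """Unsigned point-to-plane distances."""
        centroid, normal = plane
        return np.abs((points - centroid) @ normal)

    def should_merge(points_a, points_b, alpha=0.05):
        """Hypothetical rule: merge if the residuals of both segments w.r.t. a
        jointly fitted plane come from indistinguishable distributions."""
        joint = fit_plane(np.vstack([points_a, points_b]))
        ra, rb = residuals(points_a, joint), residuals(points_b, joint)
        _, ks_p = ks_2samp(ra, rb)
        _, mw_p = mannwhitneyu(ra, rb, alternative="two-sided")
        return ks_p > alpha and mw_p > alpha

    # Toy usage: two noisy samples of the same plane.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(500, 2))
    pts = np.column_stack([xy, 0.01 * rng.standard_normal(500)])
    print(should_merge(pts[:250], pts[250:]))  # typically True: same underlying plane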

    Fast and Robust Normal Estimation for Point Clouds with Sharp Features

    Proceedings of the 10th Symposium on Geometry Processing (SGP 2012), Tallinn, Estonia, July 2012. This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals. The normals we estimate correspond to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, as produced by laser scans due to differences in incidence angle. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. Besides, it can handle anisotropy with minor speed and precision losses.
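    To make the voting idea concrete, here is a deliberately simplified sketch of Hough-style normal estimation for a single query point: random triplets of neighbors vote plane normals into a fixed spherical-coordinate accumulator and the densest bin wins. The bin layout, sample count, and parameter values are our own placeholders; the paper's statistical exploration bounds, randomized accumulator rotations, and anisotropy-aware sampling are not reproduced here.

    # Simplified Hough-style normal voting for one query point (an assumption-laden
    # sketch, not the paper's implementation).
    import numpy as np

    def rht_normal(neighbors, n_samples=300, n_phi=16, n_theta=32, rng=None):
        """Estimate a normal by letting random point triplets vote for plane normals."""
        rng = np.random.default_rng() if rng is None else rng
        acc = np.zeros((n_phi, n_theta))
        votes = [[[] for _ in range(n_theta)] for _ in range(n_phi)]
        for _ in range(n_samples):
            p0, p1, p2 = neighbors[rng.choice(len(neighbors), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-12:
                continue                      # skip degenerate (near-collinear) triplets
            n /= norm
            if n[2] < 0:                      # fold antipodal directions together
                n = -n
            phi = min(int(np.arccos(np.clip(n[2], -1.0, 1.0)) / np.pi * n_phi), n_phi - 1)
            theta = min(int((np.arctan2(n[1], n[0]) + np.pi) / (2 * np.pi) * n_theta), n_theta - 1)
            acc[phi, theta] += 1
            votes[phi][theta].append(n)
        i, j = np.unravel_index(np.argmax(acc), acc.shape)
        if not votes[i][j]:
            raise ValueError("all sampled triplets were degenerate")
        best = np.mean(votes[i][j], axis=0)   # average the normals of the winning bin
        return best / np.linalg.norm(best)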

    Using a Waffle Iron for Automotive Point Cloud Semantic Segmentation

    Semantic segmentation of point clouds in autonomous driving datasets requires techniques that can process large numbers of points efficiently. Sparse 3D convolutions have become the de facto tools for constructing deep neural networks for this task: they exploit point cloud sparsity to reduce the memory and computational loads and are at the core of today's best methods. In this paper, we propose an alternative method that reaches the level of state-of-the-art methods without requiring sparse convolutions. We show that this level of performance is achievable by relying on tools a priori unfit for large-scale and high-performing 3D perception. In particular, we propose a novel 3D backbone, WaffleIron, made almost exclusively of MLPs and dense 2D convolutions, and we present how to train it to reach high performance on SemanticKITTI and nuScenes. We believe that WaffleIron is a compelling alternative to backbones using sparse 3D convolutions, especially in frameworks and on hardware where those convolutions are not readily available. Accepted at ICCV23. Code available at https://github.com/valeoai/WaffleIro
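    The abstract describes the backbone only at a high level, so the block below is our own guess at what a dense-2D mixing layer of this flavor could look like: point features are scattered onto a dense 2D grid, mixed with an ordinary 2D convolution, gathered back to the points, and followed by a point-wise MLP. The class name, the scatter-mean projection, the grid resolution, and all layer sizes are placeholder assumptions, not the WaffleIron repository code.

    # Hedged sketch of a points -> dense 2D grid -> 2D conv -> points block with a
    # point-wise MLP, written in PyTorch under the assumptions stated above.
    import torch
    import torch.nn as nn

    class PointTo2DMixer(nn.Module):
        def __init__(self, channels=64, grid=128):
            super().__init__()
            self.grid = grid
            self.conv = nn.Sequential(                  # dense 2D spatial mixing
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(),
            )
            self.mlp = nn.Sequential(                   # point-wise channel mixing
                nn.Linear(channels, channels), nn.ReLU(),
                nn.Linear(channels, channels),
            )

        def forward(self, feats, xy):
            # feats: (N, C) point features; xy: (N, 2) coordinates scaled to [0, 1).
            n, c = feats.shape
            cell = (xy * self.grid).long().clamp(0, self.grid - 1)
            idx = cell[:, 0] * self.grid + cell[:, 1]   # flat grid-cell index per point
            grid = feats.new_zeros(self.grid * self.grid, c)
            count = feats.new_zeros(self.grid * self.grid, 1)
            grid.index_add_(0, idx, feats)              # scatter-sum point features
            count.index_add_(0, idx, feats.new_ones(n, 1))
            grid = grid / count.clamp(min=1)            # scatter-mean per cell
            grid = grid.t().reshape(1, c, self.grid, self.grid)
            grid = self.conv(grid).reshape(c, -1).t()
            feats = feats + grid[idx]                   # gather back, residual connection
            return feats + self.mlp(feats)              # point-wise MLP, residual connection

    # Toy usage with random points.
    layer = PointTo2DMixer()
    out = layer(torch.randn(1024, 64), torch.rand(1024, 2))  # out: (1024, 64)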